31 research outputs found

    An Optimizing Method for Performance and Resource Utilization in Quantum Machine Learning Circuits

    Quantum computing is a new and advanced topic that refers to calculations based on the principles of quantum mechanics. It makes certain kinds of problems easier to solve than on classical computers. This advantage of quantum computing can be used to tackle many existing problems in different fields remarkably effectively. One important field in which quantum computing has shown great results is machine learning. Until now, many different quantum algorithms have been presented to perform different machine learning approaches. In some special cases, the execution time of these quantum algorithms is reduced exponentially compared to the classical ones. At the same time, with increasing data volume and computation time, shielding the system from unwanted interactions with the environment becomes a daunting task, and since these algorithms work on machine learning problems, which usually involve big data, their implementation is very costly in terms of quantum resources. In this paper, we propose an approach to reduce the cost of quantum circuits and, in particular, to optimize quantum machine learning circuits. To reduce the number of resources used, the approach combines different optimization algorithms and is applied to quantum machine learning algorithms for big data. In this case, the optimized circuits run quantum machine learning algorithms in less time than the original ones while preserving the original functionality. Our approach reduces the number of quantum gates by 10.7% and 14.9% in different circuits, and the number of time steps by three and 15 units, respectively. This is the reduction for one iteration of a given sub-circuit U in the main circuit; when this sub-circuit is repeated more times in the main circuit, the optimization rate increases. Therefore, by applying the proposed method to circuits with big data, both cost and performance are improved
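    The abstract does not include code; purely as a hedged illustration of the kind of gate-count and depth (time-step) reduction it describes, the sketch below optimizes a small hypothetical sub-circuit U with Qiskit's built-in transpiler. The circuit contents and the optimization level are assumptions, not the authors' method.

    ```python
    # Illustrative sketch only: gate-count / depth reduction of a small
    # sub-circuit U via Qiskit's transpiler passes. The circuit and the
    # optimization settings are assumptions, not the paper's algorithm.
    from qiskit import QuantumCircuit, transpile

    # A hypothetical sub-circuit U with deliberately redundant structure.
    u = QuantumCircuit(3)
    u.h(0)
    u.cx(0, 1)
    u.cx(0, 1)          # back-to-back CNOTs cancel
    u.rz(0.3, 2)
    u.rz(0.4, 2)        # adjacent rotations can be merged
    u.h(0)

    optimized = transpile(u, optimization_level=3)

    print("original gates:", sum(u.count_ops().values()), "depth:", u.depth())
    print("optimized gates:", sum(optimized.count_ops().values()),
          "depth:", optimized.depth())
    ```

    If U is repeated k times in the main circuit, any per-iteration saving in gates and depth scales with k, which is the effect the abstract refers to.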

    Multimodal database of emotional speech, video and gestures

    People express emotions through different modalities. Integration of verbal and non-verbal communication channels creates a system in which the message is easier to understand. Expanding the focus to several expression forms can facilitate research on emotion recognition as well as human-machine interaction. In this article, the authors present a Polish emotional database composed of three modalities: facial expressions, body movement and gestures, and speech. The corpus contains recordings registered in studio conditions, acted out by 16 professional actors (8 male and 8 female). The data is labeled with six basic emotion categories, according to Ekman's emotion categories. To check the quality of the performances, all recordings are evaluated by experts and volunteers. The database is available to the academic community and might be useful in studies on audio-visual emotion recognition
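    No loading code accompanies the abstract; as a rough sketch of how records from such a three-modality corpus might be organized, the snippet below defines a hypothetical schema. The field names, file layout, and label ordering are assumptions, only the six Ekman categories and the 16-actor setup come from the abstract.

    ```python
    # Hypothetical record schema for a three-modality emotional corpus.
    # Field names and file paths are illustrative assumptions.
    from dataclasses import dataclass

    EKMAN_LABELS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

    @dataclass
    class EmotionRecording:
        actor_id: int        # 1..16 (8 male, 8 female professional actors)
        label: str           # one of EKMAN_LABELS
        face_video: str      # path to facial-expression video
        body_video: str      # path to body-movement / gesture video
        audio: str           # path to speech recording

    sample = EmotionRecording(1, "happiness",
                              "face/001.mp4", "body/001.mp4", "audio/001.wav")
    print(sample.label in EKMAN_LABELS)
    ```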

    A genetic programming approach to development of clinical prediction models: A case study in symptomatic cardiovascular disease

    BACKGROUND: Genetic programming (GP) is an evolutionary computing methodology capable of identifying complex, non-linear patterns in large data sets. Despite the potential advantages of GP over more typical, frequentist statistical methods, its applications to survival analysis are rare at best. The aim of this study was to determine the utility of GP for the automatic development of clinical prediction models. METHODS: We compared GP against the commonly used Cox regression technique in terms of the development and performance of a cardiovascular risk score, using data from the SMART study, a prospective cohort study of patients with symptomatic cardiovascular disease. The composite endpoint was cardiovascular death, non-fatal stroke, and myocardial infarction. A total of 3,873 patients aged 19-82 years were enrolled in the study between 1996 and 2006. The cohort was split 70:30 into derivation and validation sets. The derivation set was used for development of both the GP and Cox regression models. These models were then used to predict the discrete hazards at t = 1, 3, and 5 years. The predictive ability of both models was evaluated on the validation set in terms of risk discrimination and calibration. RESULTS: The discrimination of both models was comparable. At time points t = 1, 3, and 5 years the C-index was 0.59, 0.69, and 0.64 for the GP model and 0.66, 0.70, and 0.70 for the Cox regression model, respectively. At the same time points, the calibration of both models, assessed using calibration plots and a generalization of the Hosmer-Lemeshow test statistic, was also comparable, with the Cox model being better calibrated to the validation data. CONCLUSION: Using empirical data, we demonstrated that a prediction model developed automatically by GP has predictive ability comparable to that of a manually tuned Cox regression model. The GP model was more complex, but it was developed in a fully automated way and comprised fewer covariates. Furthermore, it did not require the expertise normally needed for model derivation, thereby alleviating the knowledge elicitation bottleneck. Overall, GP demonstrated considerable potential as a method for the automated development of clinical prediction models for diagnostic and prognostic purposes
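    The study's models are not reproduced here; as a hedged sketch of the Cox-regression arm of the comparison (70:30 derivation/validation split and C-index discrimination), the snippet below uses the lifelines library on synthetic data. The covariates and data are placeholders, not the SMART cohort, and the GP model is not shown.

    ```python
    # Sketch of a Cox baseline plus C-index evaluation on synthetic data;
    # the SMART cohort and the genetic-programming model are not reproduced.
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter
    from lifelines.utils import concordance_index

    rng = np.random.default_rng(0)
    n = 1000
    df = pd.DataFrame({
        "age": rng.uniform(19, 82, n),      # placeholder covariates
        "sbp": rng.normal(140, 20, n),
    })
    risk = 0.02 * df["age"] + 0.01 * df["sbp"]
    df["time"] = rng.exponential(scale=1.0 / np.exp(risk - risk.mean()))
    df["event"] = rng.integers(0, 2, n)

    # 70:30 derivation / validation split, mirroring the study design.
    derive, validate = df.iloc[: int(0.7 * n)], df.iloc[int(0.7 * n):]

    cph = CoxPHFitter()
    cph.fit(derive, duration_col="time", event_col="event")

    # Discrimination on the validation set (higher C-index = better ranking).
    c = concordance_index(validate["time"],
                          -cph.predict_partial_hazard(validate),
                          validate["event"])
    print(f"validation C-index: {c:.2f}")
    ```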

    Approximation of phenol concentration using novel hybrid computational intelligence methods

    This paper presents two innovative evolutionary-neural systems based on feed-forward and recurrent neural networks used for quantitative analysis. These systems have been applied to the approximation of phenol concentration. Their performance was compared against conventional methods of artificial intelligence (artificial neural networks, fuzzy logic and genetic algorithms). The proposed systems are a combination of data preprocessing methods, genetic algorithms and the Levenberg–Marquardt (LM) algorithm used for training feed-forward and recurrent neural networks. The initial weights and biases of the neural networks, chosen by a genetic algorithm, are then tuned with the LM algorithm. The evaluation is made on the basis of accuracy and complexity criteria. The main advantage of the proposed systems is the elimination of the random selection of network weights and biases, resulting in increased efficiency of the systems
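    As a minimal sketch of the GA-then-LM idea, assuming a tiny one-hidden-layer feed-forward network fitted with SciPy's Levenberg-Marquardt solver, the snippet below evolves candidate initial weights and then refines the best one. The network size, GA settings, and toy data are illustrative assumptions, not the authors' system.

    ```python
    # Sketch of the GA -> Levenberg-Marquardt hybrid: a genetic algorithm picks
    # promising initial weights for a tiny feed-forward net, then LM refines them.
    # Network size, GA settings and the toy data are illustrative assumptions.
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(1)
    x = np.linspace(0, 1, 50)
    y = np.sin(3 * x) + 0.05 * rng.normal(size=x.size)   # toy "concentration" curve

    H = 4                           # hidden units
    n_params = 3 * H + 1            # w1, b1, w2 (H each) and scalar b2

    def forward(p, x):
        w1, b1, w2, b2 = p[:H], p[H:2 * H], p[2 * H:3 * H], p[-1]
        hidden = np.tanh(np.outer(x, w1) + b1)
        return hidden @ w2 + b2

    def residuals(p):
        return forward(p, x) - y

    # --- GA stage: evolve initial weight vectors by sum-of-squares fitness ---
    pop = rng.normal(size=(30, n_params))
    for _ in range(40):
        fitness = np.array([np.sum(residuals(p) ** 2) for p in pop])
        parents = pop[np.argsort(fitness)[:10]]                       # selection
        children = parents[rng.integers(0, 10, 20)] \
            + 0.1 * rng.normal(size=(20, n_params))                   # mutation
        pop = np.vstack([parents, children])                          # elitism

    best = pop[np.argmin([np.sum(residuals(p) ** 2) for p in pop])]

    # --- LM stage: fine-tune the GA-selected initial weights ---
    result = least_squares(residuals, best, method="lm")
    print("GA SSE:", np.sum(residuals(best) ** 2),
          "-> GA+LM SSE:", np.sum(result.fun ** 2))
    ```

    The point of the two stages is the one the abstract makes: the GA replaces purely random weight initialization, and LM only has to polish an already reasonable starting point.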

    Towards real-time heartbeat classification : evaluation of nonlinear morphological features and voting method

    Abnormal heart rhythms are one of the most significant health concerns worldwide. The current state of the art for recognizing and classifying abnormal heartbeats is manual visual inspection by an expert practitioner. This is not just a tedious task; it is also error prone and, because it is performed post-recording, it may add unnecessary delay to care. The real key in the fight against cardiac disease is real-time detection that triggers prompt action. The biggest hurdle to real-time detection is the rare occurrence of abnormal heartbeats, and even more so of some rare typologies that are not fully represented in signal datasets; the latter is what makes them difficult for doctors and algorithms to recognize. This work presents an automated heartbeat classification based on nonlinear morphological features and a voting scheme suitable for rare heartbeat morphologies. Although the algorithm is designed and tested on a computer, it is ultimately intended to run on portable devices, e.g., field-programmable gate array (FPGA) devices. Our algorithm was tested on the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) database as per the Association for the Advancement of Medical Instrumentation (AAMI) recommendations. The simulation results show the superiority of the proposed method, especially in predicting the minority groups: the fusion and unknown classes, with 90.4% and 100%, respectively
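    The abstract does not detail the nonlinear morphological features or the FPGA design; as a loose sketch of a feature-plus-voting pipeline for imbalanced heartbeat classes, the snippet below computes simple per-beat shape descriptors on synthetic beats and combines heterogeneous classifiers with scikit-learn's soft-voting ensemble. The feature set, classifiers, and data are assumptions, not the authors' design.

    ```python
    # Rough sketch of a morphological-feature + voting pipeline on synthetic
    # "beats"; the paper's actual nonlinear features and FPGA design are not
    # reproduced here.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, VotingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    rng = np.random.default_rng(2)

    def synthetic_beat(cls, n=180):
        t = np.linspace(-1, 1, n)
        widths = {0: 0.05, 1: 0.15, 2: 0.30}          # class-dependent QRS width
        return np.exp(-t ** 2 / widths[cls]) + 0.05 * rng.normal(size=n)

    labels = rng.integers(0, 3, 600)
    beats = np.array([synthetic_beat(c) for c in labels])

    # Simple per-beat morphological descriptors (illustrative only).
    features = np.column_stack([
        beats.max(axis=1),                            # peak amplitude
        (beats > 0.5).sum(axis=1),                    # width above half maximum
        np.abs(np.diff(beats, axis=1)).sum(axis=1),   # total variation (slope energy)
    ])

    X_tr, X_te, y_tr, y_te = train_test_split(features, labels, random_state=0)

    # Soft-voting ensemble of heterogeneous classifiers.
    vote = VotingClassifier(
        estimators=[("rf", RandomForestClassifier(random_state=0)),
                    ("svm", SVC(probability=True, random_state=0)),
                    ("lr", LogisticRegression(max_iter=1000))],
        voting="soft",
    )
    vote.fit(X_tr, y_tr)
    print("held-out accuracy:", vote.score(X_te, y_te))
    ```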